
    Robust Stochastic Bandit Algorithms under Probabilistic Unbounded Adversarial Attack

    The multi-armed bandit formalism has been extensively studied under various attack models, in which an adversary can modify the reward revealed to the player. Previous studies focused on scenarios where the attack value either is bounded at each round or has a vanishing probability of occurrence. These models do not capture powerful adversaries that can catastrophically perturb the revealed reward. This paper investigates the attack model where an adversary attacks with a certain probability at each round, and its attack value can be arbitrary and unbounded if it attacks. Furthermore, the attack value does not necessarily follow a statistical distribution. We propose a novel sample median-based and exploration-aided UCB algorithm (called med-E-UCB) and a median-based ε-greedy algorithm (called med-ε-greedy). Both of these algorithms are provably robust to the aforementioned attack model. More specifically, we show that both algorithms achieve O(log T) pseudo-regret (i.e., the optimal regret without attacks). We also provide a high-probability guarantee of O(log T) regret with respect to random rewards and random occurrence of attacks. These bounds are achieved under arbitrary and unbounded reward perturbation as long as the attack probability does not exceed a certain constant threshold. We provide multiple synthetic simulations of the proposed algorithms to verify these claims and showcase the inability of existing techniques to achieve sublinear regret. We also provide experimental results of the algorithm operating in a cognitive radio setting using multiple software-defined radios. Comment: Published at AAAI'2
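    As a rough illustration of the median-based idea described above (not the paper's exact med-ε-greedy algorithm, whose exploration schedule and constants are specified in the paper), the Python sketch below keeps the reward history per arm and exploits the arm with the highest sample median, so a small fraction of arbitrarily large corruptions cannot drag the estimate away. All function names, the exploration schedule, and the attack model parameters are illustrative assumptions.

```python
import numpy as np

def med_eps_greedy(pull, n_arms, horizon, eps0=0.1, rng=None):
    """Minimal sketch of a median-based epsilon-greedy bandit.

    `pull(arm)` returns a (possibly corrupted) reward; the per-arm sample
    median is used instead of the mean so that occasional unbounded
    perturbations do not dominate the estimate.
    """
    rng = rng or np.random.default_rng()
    rewards = [[] for _ in range(n_arms)]
    total = 0.0
    for t in range(1, horizon + 1):
        # Decaying exploration rate (one common choice; the paper's schedule may differ).
        eps = min(1.0, eps0 * n_arms / t)
        if rng.random() < eps or any(len(r) == 0 for r in rewards):
            arm = int(rng.integers(n_arms))                            # explore
        else:
            arm = int(np.argmax([np.median(r) for r in rewards]))      # exploit median estimate
        r = pull(arm)
        rewards[arm].append(r)
        total += r
    return total

# Toy usage: Bernoulli arms, attacked with probability 0.1 by a huge negative perturbation.
def corrupted_pull(arm, means=(0.3, 0.5, 0.7), rng=np.random.default_rng(0)):
    r = float(rng.random() < means[arm])
    if rng.random() < 0.1:     # adversary attacks with a fixed probability
        r -= 1e6               # arbitrary, unbounded reward perturbation
    return r

print(med_eps_greedy(corrupted_pull, n_arms=3, horizon=5000))
```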

    Trends in forensic microbiology: From classical methods to deep learning

    Forensic microbiology has been widely used in the diagnosis of causes and manner of death, identification of individuals, detection of crime locations, and estimation of the postmortem interval. However, the traditional method, microbial culture, has low efficiency, high resource consumption, and a low degree of quantitative analysis. With the development of high-throughput sequencing technology, advanced bioinformatics, and fast-evolving artificial intelligence, numerous machine learning models, such as RF, SVM, ANN, DNN, regression, PLS, ANOSIM, and ANOVA, have been established alongside advances in microbiome and metagenomic studies. Recently, deep learning models, including the convolutional neural network (CNN) and CNN-derived models, have improved the accuracy of forensic prediction by applying object detection techniques to microorganism image analysis. This review summarizes the application and development of forensic microbiology, as well as the research progress of machine learning (ML) and deep learning (DL) based on microbial genome sequencing and microbial images, and provides a future outlook on forensic microbiology.
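    The review above only names model families; as a generic, self-contained illustration of one of them, the sketch below fits a random forest to a purely synthetic microbial-abundance table for coarse postmortem-interval classification. The data, labels, and feature count are invented stand-ins for a real metagenomic profile, not anything from the review.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical data: rows are samples, columns are relative abundances of taxa,
# labels are coarse postmortem-interval bins (purely synthetic for illustration).
rng = np.random.default_rng(42)
X = rng.dirichlet(np.ones(50), size=200)   # 200 samples x 50 taxa, rows sum to 1
y = rng.integers(0, 3, size=200)           # 3 PMI bins

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Feature importances hint at which taxa drive the prediction.
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("most informative taxa (indices):", top)
```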

    Prediction of overall survival for patients with metastatic castration-resistant prostate cancer: development of a prognostic model through a crowdsourced challenge with open clinical trial data

    Background Improvements to prognostic models in metastatic castration-resistant prostate cancer have the potential to augment clinical trial design and guide treatment strategies. In partnership with Project Data Sphere, a not-for-profit initiative allowing data from cancer clinical trials to be shared broadly with researchers, we designed an open-data, crowdsourced, DREAM (Dialogue for Reverse Engineering Assessments and Methods) challenge to not only identify a better prognostic model for prediction of survival in patients with metastatic castration-resistant prostate cancer but also engage a community of international data scientists to study this disease.

    Methods Data from the comparator arms of four phase 3 clinical trials in first-line metastatic castration-resistant prostate cancer were obtained from Project Data Sphere, comprising 476 patients treated with docetaxel and prednisone from the ASCENT2 trial, 526 patients treated with docetaxel, prednisone, and placebo in the MAINSAIL trial, 598 patients treated with docetaxel, prednisone or prednisolone, and placebo in the VENICE trial, and 470 patients treated with docetaxel and placebo in the ENTHUSE 33 trial. Datasets consisting of more than 150 clinical variables were curated centrally, including demographics, laboratory values, medical history, lesion sites, and previous treatments. Data from ASCENT2, MAINSAIL, and VENICE were released publicly to be used as training data to predict the outcome of interest, namely overall survival. Clinical data were also released for ENTHUSE 33, but data for outcome variables (overall survival and event status) were hidden from the challenge participants so that ENTHUSE 33 could be used for independent validation. Methods were evaluated using the integrated time-dependent area under the curve (iAUC). The reference model, based on eight clinical variables and a penalised Cox proportional-hazards model, was used to compare method performance. Further validation was done using data from a fifth trial, ENTHUSE M1, in which 266 patients with metastatic castration-resistant prostate cancer were treated with placebo alone.

    Findings 50 independent methods were developed to predict overall survival and were evaluated through the DREAM challenge. The top performer was based on an ensemble of penalised Cox regression models (ePCR), which uniquely identified predictive interaction effects with immune biomarkers and markers of hepatic and renal function. Overall, ePCR outperformed all other methods (iAUC 0.791; Bayes factor >5) and surpassed the reference model (iAUC 0.743; Bayes factor >20). Both the ePCR model and reference models stratified patients in the ENTHUSE 33 trial into high-risk and low-risk groups with significantly different overall survival (ePCR: hazard ratio 3.32, 95% CI 2.39-4.62, p

    Interpretation Novel prognostic factors were delineated, and the assessment of 50 methods developed by independent international teams establishes a benchmark for development of methods in the future. The results of this effort show that data-sharing, when combined with a crowdsourced challenge, is a robust and powerful framework to develop new prognostic models in advanced prostate cancer. Peer reviewed.
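    The challenge's reference model was a penalised Cox proportional-hazards model on eight clinical variables; the sketch below shows only the general shape of fitting such a model with the lifelines library on synthetic data. The column names, penalty strength, and the use of the concordance index (a simpler stand-in for the time-dependent iAUC actually used in the challenge) are assumptions for illustration, not the challenge's specification.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Synthetic stand-in for a curated clinical table (column names are placeholders).
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "alb": rng.normal(40, 5, n),        # albumin
    "hb": rng.normal(13, 1.5, n),       # hemoglobin
    "psa": rng.lognormal(3, 1, n),      # prostate-specific antigen
    "ecog": rng.integers(0, 3, n),      # performance status
})
risk = 0.05 * df["psa"] / df["psa"].std() - 0.03 * df["hb"] + 0.4 * df["ecog"]
df["T"] = rng.exponential(np.exp(-risk))     # synthetic survival times
df["E"] = rng.random(n) < 0.8                # event indicator (some censoring)

# Penalised Cox proportional-hazards fit (ridge-style penalty).
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="T", event_col="E")
print(cph.summary[["coef", "p"]])
print("c-index:", concordance_index(df["T"], -cph.predict_partial_hazard(df), df["E"]))
```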

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy, and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.

    PER-ETD: A Polynomially Efficient Emphatic Temporal Difference Learning Method

    Emphatic temporal difference (ETD) learning (Sutton et al., 2016) is a successful method for off-policy value function evaluation with function approximation. Although ETD has been shown to converge asymptotically to a desirable value function, it is well known that ETD often encounters a large variance, so that its sample complexity can increase exponentially fast with the number of iterations. In this work, we propose a new ETD method, called PER-ETD (i.e., PEriodically Restarted ETD), which restarts and updates the follow-on trace only for a finite period within each iteration of the evaluation parameter. Further, PER-ETD features a restart period that increases logarithmically with the number of iterations, which guarantees the best trade-off between variance and bias and keeps both vanishing sublinearly. We show that PER-ETD converges to the same desirable fixed point as ETD, but improves the exponential sample complexity of ETD to polynomial. Our experiments validate the superior performance of PER-ETD and its advantage over ETD. Comment: Published as a conference paper at ICLR 202
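    As a minimal tabular sketch of the restart idea (not the paper's full PER-ETD algorithm, which handles general λ, interest weighting, and its specific period schedule), the code below runs emphatic TD(0) on a toy off-policy random walk and resets the follow-on trace after a window that grows logarithmically with the iteration count. The environment, policies, and constants are invented for illustration.

```python
import numpy as np

def per_etd_evaluation(n_states=5, n_iters=20000, alpha=0.02, gamma=0.8, seed=0):
    """Tabular sketch: emphatic TD(0) with a periodically restarted follow-on trace."""
    rng = np.random.default_rng(seed)
    v = np.zeros(n_states)                     # value estimates for the target policy
    s = int(rng.integers(n_states))
    F, rho_prev = 1.0, 1.0                     # follow-on trace and previous IS ratio
    for t in range(1, n_iters + 1):
        # Restart period grows logarithmically with t (the key PER-ETD idea);
        # the multiplier 2 is an arbitrary choice for this sketch.
        period = max(1, int(2 * np.log(t + 1)))
        if t % period == 0:
            F, rho_prev = 1.0, 1.0             # restart the trace to cap its variance
        # Behavior policy: uniform left/right; target policy: right with prob 0.7.
        a = int(rng.choice([-1, 1]))
        rho = (0.7 if a == 1 else 0.3) / 0.5   # importance-sampling ratio
        s_next = int(np.clip(s + a, 0, n_states - 1))
        r = 1.0 if s_next == n_states - 1 else 0.0
        F = gamma * rho_prev * F + 1.0         # F_t = gamma * rho_{t-1} * F_{t-1} + 1
        delta = r + gamma * v[s_next] - v[s]   # TD error
        v[s] += alpha * rho * F * delta        # emphatic TD(0) update (emphasis M_t = F_t)
        rho_prev, s = rho, s_next
    return np.round(v, 2)

print(per_etd_evaluation())
```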

    Data Dependent-Independent Acquisition (DDIA) Proteomics


    Crafting Monocular Cues and Velocity Guidance for Self-Supervised Multi-Frame Depth Learning

    Self-supervised monocular methods can efficiently learn depth information of weakly textured surfaces or reflective objects. However, the depth accuracy is limited due to the inherent ambiguity in monocular geometric modeling. In contrast, multi-frame depth estimation methods improve depth accuracy thanks to the success of Multi-View Stereo (MVS), which directly makes use of geometric constraints. Unfortunately, MVS often suffers from texture-less regions, non-Lambertian surfaces, and moving objects, especially in real-world video sequences without known camera motion and depth supervision. Therefore, we propose MOVEDepth, which exploits the MOnocular cues and VElocity guidance to improve multi-frame Depth learning. Unlike existing methods that enforce consistency between MVS depth and monocular depth, MOVEDepth boosts multi-frame depth learning by directly addressing the inherent problems of MVS. The key to our approach is to utilize monocular depth as a geometric prior to construct the MVS cost volume, and to adjust the depth candidates of the cost volume under the guidance of the predicted camera velocity. We further fuse monocular depth and MVS depth by learning uncertainty in the cost volume, which results in depth estimation that is robust to ambiguity in multi-view geometry. Extensive experiments show MOVEDepth achieves state-of-the-art performance: compared with Monodepth2 and PackNet, our method relatively improves the depth accuracy by 20% and 19.8% on the KITTI benchmark. MOVEDepth also generalizes to the more challenging DDAD benchmark, relatively outperforming ManyDepth by 7.2%. The code is available at https://github.com/JeffWang987/MOVEDepth
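    MOVEDepth's actual cost-volume construction is defined in the paper and repository; the numpy sketch below only illustrates the stated idea of centering the plane-sweep depth hypotheses on the monocular prediction and scaling the search range with the predicted camera velocity. The scaling rule and all constants are assumptions, not the paper's values.

```python
import numpy as np

def depth_candidates(mono_depth, velocity, n_candidates=16, base_ratio=0.4, v_ref=1.0):
    """Per-pixel depth hypotheses centered on a monocular depth prior.

    mono_depth : (H, W) array, monocular depth prediction used as a prior.
    velocity   : scalar, predicted camera speed between the two frames.
    The search range around the prior narrows when the camera barely moves
    (little multi-view parallax to exploit) and widens with larger motion.
    """
    ratio = base_ratio * np.clip(velocity / v_ref, 0.25, 1.0)
    d_min = mono_depth * (1.0 - ratio)
    d_max = mono_depth * (1.0 + ratio)
    # Linearly spaced hypotheses per pixel: shape (n_candidates, H, W).
    steps = np.linspace(0.0, 1.0, n_candidates).reshape(-1, 1, 1)
    return d_min[None] + steps * (d_max - d_min)[None]

# Toy usage with a flat 2 m prior and a slow-moving camera.
cands = depth_candidates(np.full((192, 640), 2.0), velocity=0.3)
print(cands.shape, cands.min(), cands.max())
```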

    An Ecological, Power Lean, Comprehensive Marketing Evaluation System Based on DEMATEL–CRITIC and VIKOR: A Case Study of Power Users in Northeast China

    The reduction of carbon emissions in the power industry will play a vital role in global decarbonization. The power industry has three main strategies to achieve this reduction in emissions: to implement lean marketing strategies that effectively target users of power and encourage them to adopt decarbonizing technologies and services; to optimize the efficiency of these users of power; and to improve the efficiency of renewable energy sources. This paper establishes a comprehensive evaluation system of indexed data from power industry customers for the development of lean marketing strategies. This system evaluates indexes derived from customer data on renewable energy sources, carbon emissions, energy efficiency, and customer credit. It adopts the DEMATEL–CRITIC combined weighting method and the VIKOR method for the evaluation, and conducts simulation experiments on customer data from a region of Northeastern China to give an example of how this method could be applied in practice to lean marketing. The results show that the evaluation system proposed in this paper can guide the lean marketing decision-making of power sales enterprises.
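    The paper combines DEMATEL and CRITIC weights before ranking with VIKOR; the sketch below implements only the objective CRITIC weighting and a standard VIKOR ranking on a made-up decision matrix (the DEMATEL step, the combination rule, and the paper's actual customer indexes are omitted). All numbers are illustrative, and all criteria are treated as benefit criteria for simplicity.

```python
import numpy as np

def critic_weights(X):
    """Objective criteria weights via the CRITIC method (benefit criteria assumed)."""
    # Min-max normalize each criterion column.
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    sigma = Z.std(axis=0, ddof=1)            # contrast intensity of each criterion
    corr = np.corrcoef(Z, rowvar=False)      # conflict between criteria
    info = sigma * (1.0 - corr).sum(axis=0)  # information content C_j
    return info / info.sum()

def vikor(X, w, v=0.5):
    """Standard VIKOR scores Q (lower is better), benefit criteria assumed."""
    f_best, f_worst = X.max(axis=0), X.min(axis=0)
    norm = (f_best - X) / (f_best - f_worst)
    S = (w * norm).sum(axis=1)               # group utility
    R = (w * norm).max(axis=1)               # individual regret
    Q = v * (S - S.min()) / (S.max() - S.min()) \
        + (1 - v) * (R - R.min()) / (R.max() - R.min())
    return Q

# Toy decision matrix: 4 customers x 4 indexes (renewables share, emissions score,
# energy efficiency, credit) -- values are illustrative only.
X = np.array([[0.6, 70, 0.8, 85],
              [0.2, 55, 0.7, 90],
              [0.4, 80, 0.6, 75],
              [0.8, 60, 0.9, 80]], dtype=float)
w = critic_weights(X)
print("CRITIC weights:", np.round(w, 3))
print("VIKOR Q (lower = better):", np.round(vikor(X, w), 3))
```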

    Ternary regulation mechanism of Rhizoma drynariae total flavonoids on induced membrane formation and bone remodeling in Masquelet technique

    Context Rhizoma drynariae total flavonoids (RDTF) are used to treat fractures. CD31hiEmcnhi vessels induced by PDGF-BB secreted by osteoclast precursors, together with osteoblasts and osteoclasts, constitute the ternary regulatory mechanism of bone tissue reconstruction.

    Objective This study aimed to determine whether RDTF can promote bone tissue remodeling and induce membrane growth in the rat Masquelet model and to explore its molecular mechanism based on the ternary regulation theory.

    Methods Thirty-six SD rats were randomized to three groups: blank, induced membrane, and RDTF treatment (n = 12/group). The gross morphological characteristics of the new bone tissue were observed after 6 weeks. Sixty SD rats were also randomized to five groups: blank, induction membrane, low-dose RDTF, medium-dose RDTF, and high-dose RDTF (n = 12/group). After 4 weeks, immunohistochemistry and western blot were used to detect the expression of membrane tissue-related proteins. The mRNA expression of key factors of ternary regulation was analyzed by qRT-PCR.

    Results RDTF positively affected angiogenesis and bone tissue reconstruction in the bone defect area. RDTF could upregulate the expression of key factors (PDGF-BB, CD31, and endomucin), VEGF, and HMGB1 mRNA and proteins in the ternary regulation pathway.

    Discussion and conclusion Although the expected CD31hiEmcnhi vessels in the induction membrane were not observed, this study confirmed that RDTF could promote the secretion of angiogenic factors in the induced membrane. The specific mechanisms still need to be further studied.